Use Keras experiment to transfer style Watson Machine Learning icon
Icon

This notebook contains the steps and code required to demonstrate style transfer techique using Watson Machine Learning Service. This notebook introduces commands for getting data, training_definition persistance to Watson Machine Learning repository and model training.

Some familiarity with Python is helpful. This notebook uses Python 3.

Learning goals

In this notebook you learn to work with Watson Machine Learning experiments to train Deep Learning models (Keras).

Contents

  1. Set up the environment
  2. Training definition
  3. Define the experiment
  4. Run the experiment
  5. Results
  6. Summary

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance is here)
  • Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage is here).

    • After you create COS instance, go to your COS dashboard.
    • In the Service credentials tab, click New Credential+.
    • Add the inline configuration parameter: {"HMAC":true}, click Add.

      This configuration parameter adds the following section to the instance credentials, (for use later in this notebook):

      "cos_hmac_keys": {
            "access_key_id": "722432c254bc4eaa96e05897bf2779e2",
            "secret_access_key": "286965ac10ecd4de8b44306288c7f5a3e3cf81976a03075c"
       }

In this section:

1.1 Work with Cloud Object Storage (COS)

Import the Boto library, which allows Python developers to manage COS.

In [1]:
# Import the boto library
import ibm_boto3
from ibm_botocore.client import Config
import os
import json
import warnings
import urllib
import time
warnings.filterwarnings('ignore')

Authenticate to COS and define the endpoint you will use.

  1. Enter your COS credentials in the following cell. You can find these credentials in your COS instance dashboard under the Service credentials tab as described in set up the environment.

  2. Go to the Endpoint tab in the COS instance's dashboard to get the endpoint information, for example: s3-api.us-geo.objectstorage.softlayer.net.

In [2]:
# Enter your COS credentials.
cos_credentials = {
  "apikey": "***",
  "cos_hmac_keys": {
    "access_key_id": "***",
    "secret_access_key": "***"
  },
  "endpoints": "https://cos-service.bluemix.net/endpoints",
  "iam_apikey_description": "***",
  "iam_apikey_name": "***",
  "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
  "iam_serviceid_crn": "***",
  "resource_instance_id": "***"
}

api_key = cos_credentials['apikey']
service_instance_id = cos_credentials['resource_instance_id']
auth_endpoint = 'https://iam.bluemix.net/oidc/token'
# Enter your Endpoint information.
service_endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
In [3]:
# The code was removed by DSX for sharing.

Create the Boto resource by providing type, endpoint_url and credentials.

In [4]:
cos = ibm_boto3.resource('s3',
                         ibm_api_key_id=api_key,
                         ibm_service_instance_id=service_instance_id,
                         ibm_auth_endpoint=auth_endpoint,
                         config=Config(signature_version='oauth'),
                         endpoint_url=service_endpoint)

Create the buckets you will use to store training data and training results.

Note: Bucket names must be unique.

In [5]:
# Create two buckets, style-data-example and style-results-example
buckets = ['style-data-example', 'style-results-example']
for bucket in buckets:
    if not cos.Bucket(bucket) in cos.buckets.all():
        print('Creating bucket "{}"...'.format(bucket))
        try:
            cos.create_bucket(Bucket=bucket)
        except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e:
            print('Error: {}.'.format(e.response['Error']['Message']))

You have now created two new buckets:

  • style-data-example
  • style-results-example

Display a list of buckets for your COS instance to verify that the buckets were created.

In [6]:
# Display the buckets
print(list(cos.buckets.all()))
[s3.Bucket(name='9fff2508-60c5-4e75-835b-210aa00ff010-style-data'), s3.Bucket(name='9fff2508-60c5-4e75-835b-210aa00ff010-style-results'), s3.Bucket(name='creditcardfraud545b518d4ec34681aa7e8e8680b486d5'), s3.Bucket(name='dummytodelete7e254ccd4a1c48b0bc2697e978ef5951'), s3.Bucket(name='mnist-keras-data-example'), s3.Bucket(name='mnist-keras-results-example'), s3.Bucket(name='style-data-example'), s3.Bucket(name='style-results-example'), s3.Bucket(name='think7b9ecb3e82604f03aabf91f274c3364c'), s3.Bucket(name='train-data-experiment-2018-03-28-09-34'), s3.Bucket(name='train-result-experiment-2018-03-28-09-34'), s3.Bucket(name='training-data-38a61fd8-a250-4a82-9277-ef51f88bfadb'), s3.Bucket(name='training-results-38a61fd8-a250-4a82-9277-ef51f88bfadb'), s3.Bucket(name='watsonstudiosamplenotebooks2ec122d7327248859e8d39c03f931082')]

1.2 Download training data and upload it to COS buckets

Download your training data and upload them to the 'training-data' bucket. Then, create a list of links for training dataset.

The following code snippet creates the STYLE_DATA folder and downloads the files from the links to the folder.

Tip: First, use the !pip install wget command to install the wget library:

In [ ]:
!pip install wget
In [8]:
import wget, os

# Create folder
data_dir = 'STYLE_DATA'
if not os.path.isdir(data_dir):
    os.mkdir(data_dir)

links = ['https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5',
         'https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1513px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg',
         'https://upload.wikimedia.org/wikipedia/commons/5/52/Krak%C3%B3w_239a.jpg',
         'https://upload.wikimedia.org/wikipedia/commons/3/3f/Kandinsky%2C_Lyrisches.jpg']

# Download the links to the folder
for i in range(len(links)):
    if 'Gogh' in links[i]: 
        filepath = os.path.join(data_dir, 'van_gogh.jpg')
    elif 'Krak' in links[i]: 
        filepath = os.path.join(data_dir, 'krakow.jpg')
    elif 'Kandinsky' in links[i]:
        filepath = os.path.join(data_dir, 'kandinsky.jpg')
    else:
        filepath = os.path.join(data_dir, os.path.join(links[i].split('/')[-1]))

    if not os.path.isfile(filepath):
        print(links[i])
        urllib.request.urlretrieve(links[i], filepath)

# List the files in the STYLE_DATA folder        
!ls STYLE_DATA
kandinsky.jpg  van_gogh.jpg
krakow.jpg     vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5

Base image: Cracow - main market square

In [9]:
from IPython.display import Image
Image(filename=os.path.join(data_dir, 'krakow.jpg'), width=1000)
Out[9]:

Style image 1: Vincent Van Gogh - Starry Night

In [10]:
Image(filename=os.path.join(data_dir, 'van_gogh.jpg'), width=500)
Out[10]:

Style image 2: Kandinsky Lyrisches

In [11]:
Image(filename=os.path.join(data_dir, 'kandinsky.jpg'), width=600)
Out[11]:

Upload the data files to the created buckets.

In [12]:
bucket_name = buckets[0]
bucket_obj = cos.Bucket(bucket_name)
In [13]:
for filename in os.listdir(data_dir):
    with open(os.path.join(data_dir, filename), 'rb') as data: 
        bucket_obj.upload_file(os.path.join(data_dir, filename), filename)
        print('{} is uploaded.'.format(filename))
vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5 is uploaded.
krakow.jpg is uploaded.
van_gogh.jpg is uploaded.
kandinsky.jpg is uploaded.

Let's see the list of all the buckets and their contents.

In [14]:
for obj in bucket_obj.objects.all():
    print('Object key: {}'.format(obj.key))
    print('Object size (kb): {}'.format(obj.size/1024))
Object key: kandinsky.jpg
Object size (kb): 337.97265625
Object key: krakow.jpg
Object size (kb): 2063.50390625
Object key: van_gogh.jpg
Object size (kb): 833.9755859375
Object key: vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5
Object size (kb): 78256.46875

You are done with COS, and you are now ready to train your model!

1.3 Work with the Watson Machine Learning instance

Load the libraries you need.

In [15]:
import urllib3, requests, json, base64, time, os

Authenticate to the Watson Machine Learning (WML) service on IBM Cloud.

Tip: Authentication information (your credentials) can be found in the Service credentials tab of the service instance that you created on IBM Cloud. If there are no credentials listed for your instance in Service credentials, click New credential (+) and enter the information required to generate new authentication information.

Action: Enter your WML service instance credentials here.

In [16]:
wml_credentials = {
  "url": "https://ibm-watson-ml.mybluemix.net",
  "access_key": "***",
  "username": "***",
  "password": "***",
  "instance_id": "***"
}
In [17]:
# The code was removed by DSX for sharing.

Install the watson-machine-learning-client from pypi.

In [18]:
!rm -rf $PIP_BUILD/watson-machine-learning-client
In [ ]:
!pip install --upgrade watson-machine-learning-client

Import the watson-machine-learning-client and authenticate to the service instance.

In [20]:
from watson_machine_learning_client import WatsonMachineLearningAPIClient
In [21]:
client = WatsonMachineLearningAPIClient(wml_credentials)
In [22]:
print(client.version)
1.0.83

2. Create the training definitions

2.1 Prepare the training definition metadata

Hint: The final effect depends on number of iterations, and that the number of iterations impacts the training time.

In [23]:
#Set the number of iterations.
iters = 1
In [30]:
model_definition_1_metadata = {
            client.repository.DefinitionMetaNames.NAME: "style transfer van gogh",
            client.repository.DefinitionMetaNames.FRAMEWORK_NAME: "tensorflow",
            client.repository.DefinitionMetaNames.FRAMEWORK_VERSION: "1.5",
            client.repository.DefinitionMetaNames.RUNTIME_NAME: "python",
            client.repository.DefinitionMetaNames.RUNTIME_VERSION: "3.5",
            client.repository.DefinitionMetaNames.EXECUTION_COMMAND: "python style_transfer.py krakow.jpg van_gogh.jpg krakow --iter " + str(iters)
            }
In [31]:
model_definition_2_metadata = {
            client.repository.DefinitionMetaNames.NAME: "style transfer kandinsky",
            client.repository.DefinitionMetaNames.FRAMEWORK_NAME: "tensorflow",
            client.repository.DefinitionMetaNames.FRAMEWORK_VERSION: "1.5",
            client.repository.DefinitionMetaNames.RUNTIME_NAME: "python",
            client.repository.DefinitionMetaNames.RUNTIME_VERSION: "3.5",
            client.repository.DefinitionMetaNames.EXECUTION_COMMAND: "python style_transfer.py krakow.jpg kandinsky.jpg krakow --iter " + str(iters)
            }

2.2 Get the sample model definition content files from Git

In [26]:
!rm -rf STYLE.zip
In [27]:
filename_definition = 'STYLE.zip'

if not os.path.isfile(filename_definition):
    !wget https://github.com/pmservice/wml-sample-models/raw/master/keras/style/definition/STYLE.zip

!ls STYLE.zip
--2018-04-10 08:56:10--  https://github.com/pmservice/wml-sample-models/raw/master/keras/style/definition/STYLE.zip
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/pmservice/wml-sample-models/master/keras/style/definition/STYLE.zip [following]
--2018-04-10 08:56:10--  https://raw.githubusercontent.com/pmservice/wml-sample-models/master/keras/style/definition/STYLE.zip
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.48.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.48.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3954 (3.9K) [application/zip]
Saving to: ‘STYLE.zip’

100%[======================================>] 3,954       --.-K/s   in 0s      

2018-04-10 08:56:10 (39.4 MB/s) - ‘STYLE.zip’ saved [3954/3954]

STYLE.zip

2.3 Store the training definition in the WML repository

Store definition 1

In [32]:
definition_details = client.repository.store_definition(filename_definition, model_definition_1_metadata)

definition_url = client.repository.get_definition_url(definition_details)
definition_uid = client.repository.get_definition_uid(definition_details)
print(definition_url)
https://ibm-watson-ml.mybluemix.net/v3/ml_assets/training_definitions/d729e023-9ac1-4558-8de1-7151d003f602

Store definition 2

In [33]:
definition_2_details = client.repository.store_definition(filename_definition, model_definition_2_metadata)

definition_2_url = client.repository.get_definition_url(definition_2_details)
definition_2_uid = client.repository.get_definition_uid(definition_2_details)
print(definition_2_url)
https://ibm-watson-ml.mybluemix.net/v3/ml_assets/training_definitions/2714a024-003f-400b-a13b-8e6a89b109e2

List the stored definitions

In [34]:
client.repository.list_definitions()
------------------------------------  ----------------------------------------  ------------------------  ------------------
GUID                                  NAME                                      CREATED                   FRAMEWORK
ea8c0dd5-8a6b-4e5e-a9cd-74a0c573c1de  MNIST-MLP                                 2018-03-15T10:20:58.646Z  tensorflow
8095a87c-5c94-4f43-b5fd-468b2d72a3c8  MNIST-CNN                                 2018-03-15T10:21:00.033Z  tensorflow
bebc485e-bcc9-45b7-9bce-1ba20acc3b53  MNIST-MLP                                 2018-03-15T10:35:04.997Z  tensorflow
f26539ff-43b5-4100-8915-7b0be22b2211  MNIST-CNN                                 2018-03-15T10:35:05.815Z  tensorflow
bed45da1-6372-4cb1-b889-e0d402cfe62a  MNIST-MLP                                 2018-03-15T15:41:40.335Z  tensorflow
c80aaeef-1c03-4474-8891-535aa6dae81e  MNIST-CNN                                 2018-03-15T15:41:41.174Z  tensorflow
f819f724-8e1c-4903-a04c-eee5dfc048a4  style transfer van gogh                   2018-03-15T20:33:03.354Z  tensorflow
f5c4e713-27d9-4529-a473-10dc88383a52  style transfer kandinsky                  2018-03-15T20:33:05.032Z  tensorflow
af72a944-cc83-4aaf-9152-9f25c1912bf4  style transfer van gogh                   2018-03-15T20:57:13.232Z  tensorflow
4fb3987a-f979-4e1e-a26f-1acbe11a08e7  style transfer kandinsky                  2018-03-15T20:57:13.892Z  tensorflow
1421337a-c2a8-4153-8554-accdc55bbd27  style transfer van gogh                   2018-03-15T21:45:46.889Z  tensorflow
0f8fa74a-f821-475c-85c9-a2b507ed6c63  style transfer kandinsky                  2018-03-15T21:45:48.344Z  tensorflow
cd31e290-6636-4586-8097-e3c3ac7fef8c  style transfer van gogh                   2018-03-16T08:37:18.880Z  tensorflow
76b144fc-100c-45ee-80ae-ef0d44a87cd6  style transfer kandinsky                  2018-03-16T08:37:19.383Z  tensorflow
7dc7e7cc-ae5b-4db2-8705-32174324010a  MNIST-MLP                                 2018-03-16T09:59:57.138Z  tensorflow
6a024dde-d6bc-45d3-8463-895452134c29  MNIST-CNN                                 2018-03-16T09:59:57.888Z  tensorflow
32c64603-13d2-4ec6-b3ca-4ee4650fbd8e  style transfer van gogh                   2018-03-16T10:04:24.779Z  tensorflow
18498298-3e01-43a0-a1f7-955ecc118a4c  style transfer kandinsky                  2018-03-16T10:04:25.661Z  tensorflow
9e446ccc-8b1d-4e46-a549-655f9b16a577  style transfer van gogh                   2018-03-16T10:33:54.588Z  tensorflow
1f54975c-0104-4fcb-910c-ae57e3272d7c  style transfer kandinsky                  2018-03-16T10:33:55.170Z  tensorflow
3501d90f-8599-4154-a8a5-d6ddafac15a8  MNIST-MLP                                 2018-03-16T11:53:19.116Z  tensorflow
7c1b279f-7556-4beb-bee3-5516a0daa433  MNIST-CNN                                 2018-03-16T11:53:19.907Z  tensorflow
d56f3240-b49e-4b30-98e8-fb1ca8ab6dd6  MNIST-MLP                                 2018-03-16T12:35:29.322Z  tensorflow
970df64d-d363-4524-881d-75320928c695  MNIST-CNN                                 2018-03-16T12:35:30.591Z  tensorflow
11ceb998-31d9-424e-94d7-51eb8476f143  training def for product-line-prediction  2018-03-16T18:08:30.465Z  mllib
960d8f27-e636-438d-a8b9-65cc9033e5f6  training def for drug-selection           2018-03-16T18:10:24.191Z  mllib
60994f57-61e6-4bcc-b025-18622d22c98f  My definition name                        2018-03-28T07:55:40.796Z  tensorflow
06fa6000-9b1b-4cf0-8932-f35cf449361e  Tensorflow - distributed                  2018-03-28T09:45:06.201Z  tensorflow
a473639d-4280-4157-b360-7df30f965788  Tensorflow - horovod                      2018-03-28T09:45:07.779Z  tensorflow-horovod
c5988360-4241-4565-8e6c-05fde3c8b27e  Tensorflow - ddl                          2018-03-28T09:45:08.735Z  tensorflow-ddl
c94e68e1-a618-4904-b7a5-647c15cc9330  my spark model                            2018-03-28T14:48:58.903Z  mllib
a4e7c9bd-07b6-46a9-a9a8-558650f99738  Best Heart Drug Selection                 2018-03-29T08:35:26.900Z  mllib
28184c47-75d5-4e57-beb8-e520262ce7cd  Product line model                        2018-03-29T09:21:36.318Z  mllib
d9665a07-7336-493b-b5e1-93da4ad060a8  Sentiment Prediction Model                2018-03-29T10:37:14.292Z  mllib
2cd6e7b0-d15f-4f80-8d6d-5b46e098bcf0  style transfer van gogh                   2018-04-06T09:11:37.176Z  tensorflow
3f69061e-fa05-4300-9daa-b79af1b2cb2f  style transfer kandinsky                  2018-04-06T09:11:46.417Z  tensorflow
900521a7-2f7d-43e1-8c92-34372853bdfb  style transfer kandinsky                  2018-04-06T09:12:35.619Z  tensorflow
42cbfe26-963d-4da3-91da-7d465089247b  style transfer van gogh                   2018-04-06T11:19:09.197Z  tensorflow
bafd1614-1e02-488a-96b2-52f7390a1e3e  style transfer kandinsky                  2018-04-06T11:19:32.500Z  tensorflow
8d682950-6945-48a6-8f93-a76c8de44cee  style transfer van gogh                   2018-04-10T08:56:16.173Z  tensorflow
5a2bd3bd-c601-4def-b8bd-aa79b6751052  style transfer kandinsky                  2018-04-10T08:56:17.993Z  tensorflow
d729e023-9ac1-4558-8de1-7151d003f602  style transfer van gogh                   2018-04-10T08:56:52.438Z  tensorflow
2714a024-003f-400b-a13b-8e6a89b109e2  style transfer kandinsky                  2018-04-10T08:56:54.116Z  tensorflow
------------------------------------  ----------------------------------------  ------------------------  ------------------

3. Create the experiment definition

Get a list of supported configuration parameters.

In [35]:
client.repository.ExperimentMetaNames.show()
--------------------------  ----  --------
META_PROP NAME              TYPE  REQUIRED
NAME                        str   Y
TAGS                        list  N
DESCRIPTION                 str   N
AUTHOR_NAME                 str   N
EVALUATION_METHOD           str   N
EVALUATION_METRICS          list  N
TRAINING_REFERENCES         list  Y
TRAINING_DATA_REFERENCE     dict  Y
TRAINING_RESULTS_REFERENCE  dict  Y
--------------------------  ----  --------

Create an experiment, which will train two models based on previously stored definitions.

In [36]:
TRAINING_DATA_REFERENCE = {
                            "connection": {
                                "endpoint_url": service_endpoint,
                                "aws_access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'],
                                "aws_secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key']
                            },
                            "source": {
                                "bucket": buckets[0],
                            },
                            "type": "s3"
                        }
In [37]:
TRAINING_RESULTS_REFERENCE = {
                            "connection": {
                                "endpoint_url": service_endpoint,
                                "aws_access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'],
                                "aws_secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key']
                            },
                            "target": {
                                "bucket": buckets[1],
                            },
                            "type": "s3"
                        }
In [40]:
experiment_metadata = {
            client.repository.ExperimentMetaNames.NAME: "STYLE experiment",
            client.repository.ExperimentMetaNames.TRAINING_DATA_REFERENCE: TRAINING_DATA_REFERENCE,
            client.repository.ExperimentMetaNames.TRAINING_RESULTS_REFERENCE: TRAINING_RESULTS_REFERENCE,
            client.repository.ExperimentMetaNames.TRAINING_REFERENCES: [
                        {
                            "name": "van gogh - cracow",
                            "training_definition_url": definition_url,
                            "compute_configuration": {"name": "k80x4"}
                        },
                        {
                            "name": "kandinsky - cracow",
                            "training_definition_url": definition_2_url,
                            "compute_configuration": {"name": "k80x4"}
                        },
                    ],
                }

Store the experiment in the WML repository.

In [41]:
# Store the experiment and display the experiment_uid.
experiment_details = client.repository.store_experiment(meta_props=experiment_metadata)

experiment_uid = client.repository.get_experiment_uid(experiment_details)
print(experiment_uid)
6d2e2356-076a-4fa4-99f4-c76721b61353

List the stored experiments.

In [42]:
client.repository.list_experiments()
------------------------------------  ----------------------------  ------------------------
GUID                                  NAME                          CREATED
0c8803bf-122f-4066-be77-39d6aef4de3d  MNIST experiment              2018-03-15T10:35:54.116Z
207928e4-2634-48b2-80d1-acbc3eed3db2  MNIST experiment              2018-03-15T10:21:23.857Z
298cc42b-3138-43a2-a690-bf18292e7f65  STYLE experiment              2018-03-16T10:34:03.736Z
2dabe07b-bc60-4bff-a727-1554d9fbd267  STYLE experiment              2018-04-06T11:19:45.272Z
3b274502-a203-430f-b2b2-903f1e713d40  MNIST experiment              2018-03-16T10:00:46.078Z
496a28ea-7887-4b3e-b4c0-28d21b831333  MNIST experiment              2018-03-15T15:41:47.219Z
4bae6fcb-f490-45b7-bdaf-559030b6a0f9  STYLE experiment              2018-03-15T21:46:05.795Z
6d2e2356-076a-4fa4-99f4-c76721b61353  STYLE experiment              2018-04-10T09:01:04.423Z
ceb102b9-a05a-4e42-8212-4ac1e0fe115e  STYLE experiment              2018-03-16T08:37:30.628Z
d763b2eb-1b9d-440e-8c25-09a0f06ee017  Distributed MNIST experiment  2018-03-28T09:46:25.089Z
d8cf0def-3a2c-4cee-83b9-79b5eedff2eb  STYLE experiment              2018-03-15T20:57:23.876Z
dc222702-508e-4c27-aae5-a0af07c3a810  MNIST experiment              2018-03-16T12:35:40.804Z
f35cea6e-4461-45d2-b1b7-d2250fc1a59e  STYLE experiment              2018-03-15T20:35:12.928Z
ff7bada4-7530-4c1d-8de1-59684727e021  STYLE experiment              2018-03-16T10:04:35.729Z
------------------------------------  ----------------------------  ------------------------

Get the experiment definition details

In [43]:
details = client.repository.get_experiment_details(experiment_uid)

4. Run the experiment

Tip: To run the experiment in the background, set the optional parameter asynchronous=True (or remove it)

In [44]:
experiment_run_details = client.experiments.run(experiment_uid, asynchronous=False)

#########################################################

Running '6d2e2356-076a-4fa4-99f4-c76721b61353' experiment

#########################################################


Experiment run uid: eadcdff7-291d-4a8b-8af0-d3c2ee49b958

0%   - Processing training-YnLXG_mig (1/2): experiment_state=pending, training_state=pending
0%   - Processing training-YnLXG_mig (1/2): experiment_state=pending, training_state=running
0%   - Processing training-YnLXG_mig (1/2): experiment_state=running, training_state=running
0%   - Processing training-YnLXG_mig (1/2): experiment_state=running, training_state=completed
100% - Processing training-YnLXG_mig (2/2): experiment_state=completed, training_state=completed
100% - Finished processing training runs: experiment_state=completed


--------------------------------------------------------------------
Run of '6d2e2356-076a-4fa4-99f4-c76721b61353' finished successfully.
--------------------------------------------------------------------


As you can see, the experiment run has finished.

Get the experiment run UID.

In [45]:
experiment_run_id = client.experiments.get_run_uid(experiment_run_details)
print(experiment_run_id)
eadcdff7-291d-4a8b-8af0-d3c2ee49b958

Get the run details.

The code in the following cell gets details about a particular experiment run.

In [46]:
run_details = client.experiments.get_run_details(experiment_run_id)

Get the experiment run status.

Call client.experiments.get_status(run_uid) to check the experiment run status. This is useful when you run an experiment in the background.

In [47]:
status = client.experiments.get_status(experiment_run_id)
print(status)
{'current_iteration': 1, 'submitted_at': '2018-04-10T09:01:15Z', 'state': 'completed', 'current_at': '2018-04-10T09:01:15Z'}

Monitor the experiment run.

Call client.experiments.monitor_logs(run_uid) to monitor the experiment run. This method streams the training logs content to the console.

client.experiments.monitor_logs(experiment_run_id)

List the training runs triggered by experiment run.

In [44]:
client.experiments.list_training_runs(experiment_run_id)
------------------  ------------------  ---------  --------------------  --------------------  -----------
GUID (training)     NAME                STATE      SUBMITTED             FINISHED              PERFORMANCE
training-0PsNqQgig  van gogh - cracow   completed  2018-03-16T10:34:42Z  2018-03-16T10:37:10Z
training-GTwH3Qgig  kandinsky - cracow  completed  2018-03-16T10:34:43Z  2018-03-16T10:37:07Z
------------------  ------------------  ---------  --------------------  --------------------  -----------
In [ ]:
As you can see, two training runs completed. 
In [46]:
# List the training uids.
training_uids = client.experiments.get_training_uids(experiment_run_details)
print(training_uids)
['training-0PsNqQgig', 'training-GTwH3Qgig']

5. Results - transferred styles images

In [47]:
bucket_name = buckets[1]
bucket_obj = cos.Bucket(bucket_name)
In [51]:
transfered_images = []

for uid in training_uids:
    obj = bucket_obj.Object(uid + '/transfered_images/krakow_at_iteration_' + str(iters-1) + '.png')
    filename = 'krakow_transfered_' + str(uid) + '.jpg'
    transfered_images.append(filename)
    with open(filename, 'wb') as data:
        obj.download_fileobj(data)
    print(filename)
krakow_transfered_training-0PsNqQgig.jpg
krakow_transfered_training-GTwH3Qgig.jpg

Cracow

Have a look at the original picture again.

In [52]:
Image(filename=os.path.join(data_dir, 'krakow.jpg'), width=1000)
Out[52]:

Cracow + Van Gogh

Display the picture after Van Gogh style has been applied.

In [53]:
Image(filename=transfered_images[0], width=1000)
Out[53]:

Cracow + Kandinsky

Display the picture after Kandinsky style has been applied.

In [54]:
Image(filename=transfered_images[1], width=1000)
Out[54]:

6. Summary

You successfully completed this notebook! You learned how to use watson-machine-learning-client to run experiments. Check out our:

Citations

Author

Lukasz Cmielowski, PhD, is an Automation Architect and Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increases clients' ability to turn data into actionable knowledge.

Copyright © 2018 IBM. This notebook and its source code are released under the terms of the MIT License.

Love this notebook? Don't have an account yet?
Share it with your colleagues and help them discover the power of Watson Studio! Sign Up